When to be Discrete: Analyzing Algorithm Performance on Discretized Continuous Problems
The domain of an optimization problem is seen as one of its most important
characteristics. In particular, the distinction between continuous and discrete
optimization is particularly impactful: it determines the choice of optimization
algorithm, analysis method, and more. However, in practice, no problem is
ever truly continuous. Whether this is caused by computing limits or more
tangible properties of the problem, most variables have a finite resolution.
In this work, we use the notion of the resolution of continuous variables to
discretize problems from the continuous domain. We explore how the resolution
impacts the performance of continuous optimization algorithms. Through a
mapping to integer space, we are able to compare these continuous optimizers to
discrete algorithms on the exact same problems. We show that the standard
CMA-ES fails when discretization is added to the problem.
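The core idea of discretizing a continuous variable at a given resolution, together with its mapping to integer space, can be illustrated with a small sketch. This is a hypothetical helper, not the paper's exact procedure: each variable in [lower, upper] is snapped to one of `resolution` evenly spaced grid points, and the grid index serves as the integer encoding.

```python
import numpy as np

def discretize(x, lower, upper, resolution):
    """Snap each continuous variable to a grid with `resolution` levels.

    Illustrative sketch: a value in [lower, upper] is mapped to the
    nearest of `resolution` evenly spaced grid points; the grid index
    is the integer-space encoding of the variable.
    """
    x = np.asarray(x, dtype=float)
    step = (upper - lower) / (resolution - 1)
    k = np.rint((x - lower) / step).astype(int)  # integer-space image
    return k, lower + k * step                   # (integer code, snapped value)

codes, snapped = discretize([0.12, 0.5, 0.91],
                            lower=0.0, upper=1.0, resolution=11)
# codes == [1, 5, 9]: the continuous optimizer now effectively
# searches an 11-point grid per variable.
```

A discrete optimizer can operate directly on the integer codes, while a continuous optimizer sees the snapped values, which is what enables the comparison on the exact same problems.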
Computing Star Discrepancies with Numerical Black-Box Optimization Algorithms
The star discrepancy is a measure for the regularity of a finite
set of points taken from the unit cube [0,1)^d. Low discrepancy point sets are highly
relevant for Quasi-Monte Carlo methods in numerical integration and several
other applications. Unfortunately, computing the star discrepancy
of a given point set is known to be a hard problem, with the best exact
algorithms falling short for even moderate dimensions around 8. However,
despite the difficulty of finding the global maximum that defines the
star discrepancy of the set, local evaluations at selected points
are inexpensive. This makes the problem tractable by black-box optimization
approaches.
In this work we compare 8 popular numerical black-box optimization algorithms
on the star discrepancy computation problem, using a wide set of
instances in dimensions 2 to 15. We show that all tested optimizers perform
poorly on a large majority of the instances and that, in many cases, random
search outperforms even the more sophisticated solvers. We suspect that
state-of-the-art numerical black-box optimization techniques fail to capture
the global structure of the problem, an important shortcoming that may guide
their future development.
We also provide a parallel implementation of the best-known algorithm to
compute the discrepancy.
Comment: To appear in the Proceedings of GECCO 202
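The black-box formulation rests on the fact that, while the global maximum defining the star discrepancy is hard to find, the local discrepancy value at a single anchor point is cheap to evaluate. A minimal sketch of that objective (using the standard definition, not the paper's optimized implementation):

```python
import numpy as np

def local_discrepancy(y, points):
    """Local star-discrepancy value at anchor point y in [0,1)^d.

    Returns |vol([0, y)) - fraction of `points` inside the box [0, y)|.
    The star discrepancy of the set is the supremum of this quantity
    over all anchors y, which is what a black-box optimizer maximizes.
    Cost is O(n * d) per evaluation for n points in dimension d.
    """
    points = np.asarray(points)
    y = np.asarray(y)
    volume = float(np.prod(y))               # Lebesgue measure of [0, y)
    inside = np.all(points < y, axis=1).mean()  # empirical measure
    return abs(volume - inside)

# Toy check in d = 2 with three points: vol([0, 0.5)^2) = 0.25,
# and exactly one of the three points falls inside the box.
pts = np.array([[0.1, 0.1], [0.4, 0.6], [0.8, 0.3]])
val = local_discrepancy([0.5, 0.5], pts)
```

Any numerical black-box optimizer can then be pointed at `y -> local_discrepancy(y, pts)` over [0,1)^d, which is exactly the setting the comparison above studies.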
Per-run Algorithm Selection with Warm-starting using Trajectory-based Features
Per-instance algorithm selection seeks to recommend, for a given problem
instance and a given performance criterion, one or several suitable algorithms
that are expected to perform well for the particular setting. The selection is
classically done offline, using openly available information about the problem
instance or features that are extracted from the instance during a dedicated
feature extraction step. This ignores valuable information that the algorithms
accumulate during the optimization process.
In this work, we propose an alternative, online algorithm selection scheme
which we coin per-run algorithm selection. In our approach, we start the
optimization with a default algorithm, and, after a certain number of
iterations, extract instance features from the observed trajectory of this
initial optimizer to determine whether to switch to another optimizer. We test
this approach using the CMA-ES as the default solver, and a portfolio of six
different optimizers as potential algorithms to switch to. In contrast to other
recent work on online per-run algorithm selection, we warm-start the second
optimizer using information accumulated during the first optimization phase. We
show that our approach outperforms static per-instance algorithm selection. We
also compare two different feature extraction principles, based on exploratory
landscape analysis and time series analysis of the internal state variables of
the CMA-ES, respectively. We show that a combination of both feature sets
provides the most accurate recommendations for our test cases, taken from the
BBOB function suite from the COCO platform and the YABBOB suite from the
Nevergrad platform.
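The per-run protocol described above can be sketched end-to-end with deliberately tiny stand-ins. Everything here is hypothetical scaffolding: two toy solvers replace the CMA-ES and the portfolio, and a single trivial trajectory statistic replaces the landscape and time-series features; only the control flow (default phase, feature-based switch decision, warm-started second phase) mirrors the approach.

```python
import random

def random_search(problem, budget, rng, start=None):
    """Toy default solver: uniform random search in [0,1]^dim.

    `start=(x, f)` warm-starts the incumbent from an earlier phase.
    """
    best_x, best_f = start if start else (None, float("inf"))
    for _ in range(budget):
        x = [rng.random() for _ in range(problem["dim"])]
        f = problem["f"](x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

def local_search(problem, budget, rng, start):
    """Toy second solver: Gaussian perturbation of the warm-started incumbent."""
    best_x, best_f = start
    for _ in range(budget):
        x = [min(1.0, max(0.0, xi + rng.gauss(0, 0.05))) for xi in best_x]
        f = problem["f"](x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

def per_run_selection(problem, switch_budget, total_budget, seed=0):
    """Sketch of per-run algorithm selection with warm-starting."""
    rng = random.Random(seed)
    # Phase 1: run the default solver for part of the budget.
    best_x, best_f = random_search(problem, switch_budget, rng)
    # Trajectory "feature" (trivial stand-in for ELA / state features):
    # has the default already found a reasonably good incumbent?
    second = local_search if best_f < 0.5 else random_search
    # Phase 2: the chosen solver is warm-started with phase-1 information
    # instead of restarting from scratch.
    return second(problem, total_budget - switch_budget, rng,
                  start=(best_x, best_f))
```

In the real system the switch decision comes from regression models trained on exploratory landscape analysis and CMA-ES internal-state features, and warm-starting transfers richer state than a single incumbent; the sketch only shows where those pieces plug in.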
The evolutionary rewiring of ubiquitination targets has reprogrammed the regulation of carbon assimilation in the pathogenic yeast Candida albicans
Date of Acceptance: 13/11/2012. Open-access article distributed under the Creative Commons Attribution-Noncommercial-ShareAlike 3.0 Unported license. Correction for Sandai et al., "The Evolutionary Rewiring of Ubiquitination Targets Has Reprogrammed the Regulation of Carbon Assimilation in the Pathogenic Yeast Candida albicans," published 20-01-2015, DOI: 10.1128/mBio.02489-14.
Trajectory-based Algorithm Selection with Warm-starting
Landscape-aware algorithm selection approaches have so far mostly been
relying on landscape feature extraction as a preprocessing step, independent of
the execution of optimization algorithms in the portfolio. This introduces a
significant overhead in computational cost for many practical applications, as
features are extracted and computed via sampling and evaluating the problem
instance at hand, similarly to what the optimization algorithm would perform
anyway within its search trajectory. As suggested in Jankovic et al. (EvoAPPs
2021), trajectory-based algorithm selection circumvents the problem of costly
feature extraction by computing landscape features from points that a solver
sampled and evaluated during the optimization process. Features computed in
this manner are used to train algorithm performance regression models, upon
which a per-run algorithm selector is then built.
In this work, we apply the trajectory-based approach onto a portfolio of five
algorithms. We study the quality and accuracy of performance regression and
algorithm selection models in the scenario of predicting different algorithm
performances after a fixed budget of function evaluations. We rely on landscape
features of the problem instance computed using one portion of the
aforementioned budget of the same function evaluations. Moreover, we consider
the possibility of switching between the solvers once, which requires them to
be warm-started, i.e. when we switch, the second solver continues the
optimization process already being initialized appropriately by making use of
the information collected by the first solver. In this new context, we show
promising performance of the trajectory-based per-run algorithm selection with
warm-starting.
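The warm-starting step above, i.e. initializing the second solver appropriately from the first solver's trajectory, can be illustrated with a small sketch. This is a hedged illustration, not the paper's exact procedure: the second solver's initial mean is taken as the centroid of the k best evaluated points, and its initial step-size as their average per-coordinate spread.

```python
import numpy as np

def warm_start_params(trajectory, k=10):
    """Derive an initial mean and step-size for a second-phase solver.

    `trajectory` is a list of (x, f) pairs evaluated by the first
    solver (minimization). We keep the k best points, use their
    centroid as the new search mean, and their average per-coordinate
    standard deviation as the initial step-size sigma. A hypothetical
    stand-in for the actual warm-starting procedure.
    """
    best = sorted(trajectory, key=lambda xf: xf[1])[:k]
    xs = np.array([x for x, _ in best])
    mean = xs.mean(axis=0)
    sigma = float(xs.std(axis=0).mean()) or 1e-8  # avoid a zero step-size
    return mean, sigma
```

A CMA-ES-like second solver would then be started with this mean and sigma rather than a uniform initialization, so the information gathered in phase one is not discarded at the switch.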
Improving Comprehension Efficiency of High Content Screening Data Through Interactive Visualizations
In this study, an experiment is conducted to measure the performance in speed and accuracy of interactive visualizations. A platform for interactive data visualizations was implemented using Django, D3, and Angular. Using this platform, a questionnaire was designed to measure a difference in performance between interactive and noninteractive data visualizations. In this questionnaire consisting of 12 questions, participants were given tasks in which they had to identify trends or patterns. Other tasks were directed at comparing and selecting algorithms with a certain outcome based on visualizations. All tasks were performed on high content screening data sets with the help of visualizations. The difference in time to carry out tasks and accuracy of performance was measured between a group viewing interactive visualizations and a group viewing noninteractive visualizations. The study shows a significant advantage in time and accuracy in the group that used interactive visualizations over the group that used noninteractive visualizations. In tasks comparing results of different algorithms, a significant decrease in time was observed in using interactive visualizations over noninteractive visualizations
Results from the Joint Nevergrad and IOHprofiler Open Optimization Competition
Volume 14, issue 4. SIGEVOlution, newsletter of the ACM Special Interest Group on Genetic and Evolutionary Computation.